Estimation from quantized Gaussian measurements: when and how to use dither
Subtractive dither is a powerful method for removing the signal dependence of quantization noise for coarsely quantized signals. However, estimation from dithered measurements often naively applies the sample mean or midrange, even when the total noise is not well described with a Gaussian or uniform distribution. We show that the generalized Gaussian distribution approximately describes subtractively dithered, quantized samples of a Gaussian signal. Furthermore, a generalized Gaussian fit leads to simple estimators based on order statistics that match the performance of more complicated maximum likelihood estimators requiring iterative solvers. The order statistics-based estimators outperform both the sample mean and midrange for nontrivial sums of Gaussian and uniform noise. Additional analysis of the generalized Gaussian approximation yields rules of thumb for determining when and how to apply dither to quantized measurements. Specifically, we find subtractive dither to be beneficial when the ratio between the Gaussian standard deviation and quantization interval length is roughly less than one-third. When that ratio is also greater than 0.822/K^0.930 for the number of measurements K > 20, estimators we present are more efficient than the midrange.
https://arxiv.org/abs/1811.06856
Accepted manuscript
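As a rough illustration of the setup this abstract describes, the sketch below simulates subtractive dithering of Gaussian samples through an assumed mid-tread uniform quantizer. All parameter values are illustrative assumptions, not the paper's, and the midrange shown is only the simplest of the order-statistics estimators the paper studies:

```python
# Minimal sketch of subtractive dither, assuming a mid-tread uniform
# quantizer; parameter values are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
delta = 1.0     # quantization interval length (assumed)
sigma = 0.25    # Gaussian std; sigma/delta = 0.25 is below the ~1/3 rule of thumb
K = 100         # number of measurements (K > 20)

x = rng.normal(0.8, sigma, K)                 # Gaussian samples, unknown mean 0.8
u = rng.uniform(-delta / 2, delta / 2, K)     # dither, known to the receiver

def quantize(v, step):
    """Mid-tread uniform quantizer with interval `step`."""
    return step * np.round(v / step)

y = quantize(x + u, delta) - u                # subtractively dithered measurements

# Classical result: the total error y - x is uniform on [-delta/2, delta/2]
# and independent of x, so simple mean estimators apply directly.
mean_est = y.mean()
midrange_est = 0.5 * (y.min() + y.max())      # simplest order-statistics estimator
```

For these assumed values, sigma/delta = 0.25 exceeds the paper's threshold 0.822/K^0.930 (about 0.011 at K = 100), which is the regime where the paper's order-statistics estimators are more efficient than the midrange.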
Medium Modifications of the Rho Meson at CERN/SPS Energies
Rho meson propagation in hot hadronic matter is studied in a model with
coupling to states. Medium modifications are induced by a change of
the pion dispersion relation through collisions with nucleons and in
the fireball. Maintaining gauge invariance, dilepton production is calculated
and compared to the recent data of the CERES collaboration in central S+Au
collisions at 200 GeV/u. The observed enhancement of the rate below the rho
meson mass can be largely accounted for.
Comment: 10 pages, RevTeX, and 2 figures (uuencoded .ps-files)
Low-mass dielectrons from the PHENIX experiment at RHIC
The production of low-mass dielectrons is considered a powerful tool to study
the properties of the hot and dense matter created in ultra-relativistic
heavy-ion collisions. We present preliminary results on the first measurements
of the low-mass dielectron continuum in Au+Au collisions and on phi meson
production measured in Au+Au and d+Au collisions at sqrt{s_NN} = 200 GeV,
performed by the PHENIX experiment.
Comment: 6 pages, 12 figures, conference proceedings for QNP06 (5-10 June 2006, Madrid)
Photon production in relativistic nuclear collisions at SPS and RHIC energies
Chiral Lagrangians are used to compute the production rate of photons from
the hadronic phase of relativistic nuclear collisions. Special attention is
paid to the role of the pseudovector a_1 meson. Calculations that include
reactions with strange mesons, hadronic form factors and vector spectral
densities consistent with dilepton production, as well as the emission from a
quark-gluon plasma and primordial nucleon-nucleon collisions, reproduce the
photon spectra measured at the Super Proton Synchrotron (SPS). Predictions for
the Relativistic Heavy Ion Collider (RHIC) are made.
Comment: Work presented at the 26th annual Montreal-Rochester-Syracuse-Toronto conference (MRST 2004) on high energy physics, Montreal, QC, Canada, 12-14 May 2004. 8 pages, 3 figures
Dead Time Compensation for High-Flux Ranging
Dead time effects have been considered a major limitation for fast data
acquisition in various time-correlated single photon counting applications,
since the commonly adopted mitigation is to operate in the low-flux regime,
where dead time effects can be ignored. Through the application
of lidar ranging, this work explores the empirical distribution of detection
times in the presence of dead time and demonstrates that an accurate
statistical model can result in reduced ranging error with shorter data
acquisition time when operating in the high-flux regime. Specifically, we show
that the empirical distribution of detection times converges to the stationary
distribution of a Markov chain. Depth estimation can then be performed by
passing the empirical distribution through a filter matched to the stationary
distribution. Moreover, based on the Markov chain model, we formulate the
recovery of arrival distribution from detection distribution as a nonlinear
inverse problem and solve it via provably convergent mathematical optimization.
By comparing per-detection Fisher information for depth estimation from high-
and low-flux detection time distributions, we provide an analytical basis for
possible improvement of ranging performance resulting from the presence of dead
time. Finally, we demonstrate the effectiveness of our formulation and
algorithm via simulations of lidar ranging.
Comment: Revision with added estimation results, references, and figures, and modified appendices
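A rough simulation can illustrate the regime this abstract describes: photon arrivals in a periodic, lidar-like acquisition passed through a detector with nonparalyzable dead time. Everything below is an illustrative assumption (parameter values, pulse shape, and the naive histogram-peak estimator), not the paper's method; the paper's matched filter against the Markov-chain stationary distribution is what improves on this naive estimate:

```python
# Illustrative simulation (not the paper's code) of high-flux lidar
# detection with nonparalyzable dead time; all parameters are assumed.
import numpy as np

rng = np.random.default_rng(1)
period, dead_time = 100.0, 30.0   # repetition period and dead time (arb. units)
t0, pulse_sigma = 60.0, 2.0       # true return time ("depth") and pulse width
bg_rate, sig_mean = 0.03, 2.0     # background rate per unit time; signal photons/period

# Generate sorted absolute photon arrival times over many periods.
arrivals = []
for k in range(5000):
    n_bg = rng.poisson(bg_rate * period)
    n_sig = rng.poisson(sig_mean)
    t = np.concatenate([rng.uniform(0.0, period, n_bg),
                        rng.normal(t0, pulse_sigma, n_sig)])
    t = t[(t >= 0.0) & (t < period)]
    arrivals.append(k * period + np.sort(t))
arrivals = np.concatenate(arrivals)

# Nonparalyzable dead time: each detection blocks the detector for `dead_time`.
detections, next_ready = [], -np.inf
for t in arrivals:
    if t >= next_ready:
        detections.append(t % period)   # detection time within the period
        next_ready = t + dead_time
detections = np.asarray(detections)

# The empirical distribution of detection times is distorted by dead time;
# its argmax is the naive depth estimate the Markov-chain matched filter
# is designed to improve.
counts, edges = np.histogram(detections, bins=100, range=(0.0, period))
peak = 0.5 * (edges[counts.argmax()] + edges[counts.argmax() + 1])
```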
Calibration of weirs by means of critical flow and specific energy
For many years man has been relying more each day upon metering devices to measure the flow of fluids in industry. The orifice, orifice meter, venturi meter, pitot tube, nozzle, flume and the weir have been employed, each having its own particular advantage.
In this parade of metering progress, the contracted weir has long been the forgotten brother of the suppressed weir. In almost every textbook on hydraulics or fluid mechanics, the following words appear: "end contractions are to be avoided where the weir cannot be calibrated." This skepticism and the length or absurdity of some contracted weir formulas led to the development of this study --Introduction, page 1
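The critical-flow and specific-energy relations underlying such a calibration can be sketched for a rectangular channel. This is textbook open-channel hydraulics in SI units under ideal, frictionless assumptions, not the thesis's fitted weir formula:

```python
# Critical flow and specific energy for a rectangular channel (SI units);
# an idealized sketch, not the thesis's calibrated weir equation.
import math

g = 9.81  # gravitational acceleration, m/s^2

def critical_depth(q):
    """Critical depth y_c for unit discharge q (m^2/s): y_c = (q^2 / g)^(1/3)."""
    return (q ** 2 / g) ** (1.0 / 3.0)

def specific_energy(y, q):
    """Specific energy E = y + q^2 / (2 g y^2) for depth y and unit discharge q."""
    return y + q ** 2 / (2.0 * g * y ** 2)

def ideal_weir_discharge(b, H):
    """Ideal discharge over a rectangular weir of width b under head H.
    From E_min = (3/2) y_c: Q = (2/3)^(3/2) * sqrt(g) * b * H^(3/2)."""
    return (2.0 / 3.0) ** 1.5 * math.sqrt(g) * b * H ** 1.5

q = 2.0                      # unit discharge, m^2/s (assumed example value)
yc = critical_depth(q)
# At critical depth, specific energy is minimized and equals 1.5 * y_c,
# which is the link between measured head and discharge for a weir.
assert abs(specific_energy(yc, q) - 1.5 * yc) < 1e-9
```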